AUTHORS: Yang-Keun Ahn, Kwang-Soon Choi, Young-Choong Park
ABSTRACT: This study proposes a method for controlling a 3D entity shown on a transparent display by distinguishing between the hand and a physical object using a single depth camera and extracting the relevant information for each. The hardware configuration for controlling the 3D entity is presented. To enable this control, a target area in which the hand is distinguished from the object is extracted in a preprocessing stage. The extracted target-area images are normalized to an identical size and projected onto Zernike moment basis functions. A database of the resulting moment values is built so that the hand can be distinguished from the object when input images arrive in real time. The study also presents a method for interacting with the 3D entity using the hand and the object. To validate the system's performance, its recognition rate and recognition time were evaluated.
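The pipeline summarized above normalizes each segmented image and projects it onto Zernike moment basis functions. As a minimal, illustrative sketch (not the authors' implementation), the magnitude of a single Zernike moment |Z_{n,m}| can be computed over the unit disk; this magnitude is invariant to in-plane rotation, which is what makes it usable as a shape descriptor for the hand/object database:

```python
from math import atan2, cos, factorial, pi, sin, sqrt

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    total = 0.0
    for s in range((n - m) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s)
                 / (factorial(s)
                    * factorial((n + m) // 2 - s)
                    * factorial((n - m) // 2 - s)))
        total += coeff * rho ** (n - 2 * s)
    return total

def zernike_moment(img, n, m):
    """|Z_{n,m}| of a square grayscale image mapped onto the unit disk.

    img is a list of equal-length rows of pixel intensities.
    Pixels falling outside the inscribed unit disk are ignored.
    """
    size = len(img)
    centre = (size - 1) / 2.0
    re = im = 0.0
    for y in range(size):
        for x in range(size):
            # Map pixel coordinates into the unit disk.
            xn = (x - centre) / centre
            yn = (y - centre) / centre
            rho = sqrt(xn * xn + yn * yn)
            if rho > 1.0:
                continue
            theta = atan2(yn, xn)
            r = radial_poly(n, m, rho)
            # Accumulate f(x, y) * conj(V_{n,m}(rho, theta)).
            re += img[y][x] * r * cos(m * theta)
            im -= img[y][x] * r * sin(m * theta)
    return (n + 1) / pi * sqrt(re * re + im * im)
```

Because only the magnitude is kept, rotating the input silhouette leaves the descriptor unchanged, so a database of such values can match a hand or object regardless of its orientation in the image plane.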
KEYWORDS: Hand Detection, Object Recognition, Air Touch, Transparent Display
REFERENCES:
[1] J. Lee, A. Olwal, H. Ishii, and C. Boulanger, "SpaceTop: Integrating 2D and Spatial 3D Interactions in a See-through Desktop Environment," in Proceedings of SIGCHI, 2013, pp. 189-192.
[2] C.-H. Teh and R. T. Chin, "On Image Analysis by the Methods of Moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, 1988.
[3] A. Khotanzad and Y. H. Hong, "Invariant Image Recognition by Zernike Moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 5, pp. 489-497, 1990.
[4] http://www.softkinetic.com